Susan Holmes and Wolfgang Huber
July 6, 2017
It is now common practice to measure many variables on the same patients. For 100 patients we may have all the biometric characteristics (height, weight, BMI, age) as well as clinical variables such as blood pressure, blood sugar and heart rate; these variables will not be independent.
To start off, we'll use a toy example from the sports world: the performances of decathlon athletes.
These are measurements of athletes' performances in the decathlon: the variables m100, m400 and m1500 are performance times in seconds for the 100, 400 and 1500 metres respectively; m110 is the time taken to finish the 110 metres hurdles; pole is the pole-vault height; and weight is the distance in metres the athletes threw the weight (shot put).
|   | m100 | long | weight | highj | m400 | m110 | disc | pole | jave | m1500 |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 11.25 | 7.43 | 15.48 | 2.27 | 48.90 | 15.13 | 49.28 | 4.70 | 61.32 | 268.95 |
| 2 | 10.87 | 7.45 | 14.97 | 1.97 | 47.71 | 14.46 | 44.36 | 5.10 | 61.76 | 273.02 |
| 3 | 11.18 | 7.44 | 14.20 | 1.97 | 48.29 | 14.81 | 43.66 | 5.20 | 64.16 | 263.20 |
| 4 | 10.62 | 7.38 | 15.02 | 2.03 | 49.06 | 14.72 | 44.80 | 4.90 | 64.04 | 285.11 |
diabetes=read.table(url("http://bios221.stanford.edu/data/diabetes.txt"),header=TRUE,row.names=1)
diabetes[1:4,]
## relwt glufast glutest steady insulin Group
## 1 0.81 80 356 124 55 3
## 3 0.94 105 319 143 105 3
## 5 1.00 90 323 240 143 3
## 7 0.91 100 350 221 119 3
Operational Taxonomic Unit (OTU) read counts in a microbial ecology study: the columns represent different 'species' of bacteria, and the rows are labeled by sample.
469478 208196 378462 265971 570812
EKCM1.489478 0 0 2 0 0
EKCM7.489464 0 0 2 0 2
EKBM2.489466 0 0 12 0 0
PTCM3.489508 0 0 14 0 0
EKCF2.489571 0 0 4 0 0
Here are some RNA-seq transcriptomic data showing the numbers of mRNA reads for different samples; the rows are samples and the columns are the genes.
FBgn0000017 FBgn0000018 FBgn0000022 FBgn0000024 FBgn0000028 FBgn0000032
untreated1 4664 583 0 10 0 1446
untreated2 8714 761 1 11 1 1713
untreated4 3150 310 0 3 0 672
treated1 6205 722 0 10 0 1698
treated3 3334 308 0 5 1 757
Mass spectrometry data, where the samples carry informative labels (knockout versus wildtype mice) and the features are designated by their \(m/z\) number.
mz 129.9816 72.08144 151.6255 142.0349 169.0413 186.0355
KOGCHUM1 60515 181495 0 196526 25500 51504.40
WTGCHUM1 252579 54697 412 487800 48775 130491.15
WTGCHUM2 187859 56318 46425 454226 45626 100845.01
Melanoma/T cell data: Peter Lee, Susan Holmes, PNAS.
load(url("http://bios221.stanford.edu/data/Msig3transp.RData"))
round(Msig3transp,2)[1:5,1:6]
## X3968 X14831 X13492 X5108 X16348 X585
## HEA26_EFFE_1 -2.61 -1.19 -0.06 -0.15 0.52 -0.02
## HEA26_MEM_1 -2.26 -0.47 0.28 0.54 -0.37 0.11
## HEA26_NAI_1 -0.27 0.82 0.81 0.72 -0.90 0.75
## MEL36_EFFE_1 -2.24 -1.08 -0.24 -0.18 0.64 0.01
## MEL36_MEM_1 -2.68 -0.15 0.25 0.95 -0.20 0.17
turtles=read.table(url("http://bios221.stanford.edu/data/PaintedTurtles.txt"),header=TRUE)
turtles[1:4,]
## sex length width height
## 1 f 98 81 38
## 2 f 103 84 38
## 3 f 103 86 42
## 4 f 105 86 40
Life-history traits measured on lizard species (these data are described in more detail below):
## mean.L hatch.m clutch.S age.mat clutch.F
## Sa 69.2 0.572 6.0 13 1.5
## Sh 48.4 0.310 3.2 5 2.0
## Tl 168.4 2.235 16.9 19 1.0
## Mc 66.1 0.441 7.2 11 1.5
It is always beneficial to start a multidimensional analysis by checking the simple one and two dimensional summary statistics; we can visualize these using GGally, a graphics package that builds on ggplot2.
What do we mean by low dimensional?
If we are studying only one variable, just one column of our matrix, we might call it \({\mathbf x}\) or \({\mathbf x}_{\bullet j}\); we call it one dimensional.
A one dimensional summary is a histogram showing that variable's distribution; we could also compute its mean \(\bar{x}\) or median, which are zero dimensional summaries of one dimensional data.
In lecture 3 we studied two dimensional scatterplots.
When considering two variables (\(x\) and \(y\)) measured together on a set of observations, the correlation coefficient measures how the variables co-vary.
This is a single number that summarizes two dimensional data; its formula involves \(\bar{x}\) and \(\bar{y}\): \[\hat{\rho}=\frac{\sum_{i=1}^n (x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_{i=1}^n (x_i-\bar{x})^2} \sqrt{\sum_{i=1}^n (y_i-\bar{y})^2}} \label{eq:corrcoeff}\]
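As a quick sanity check (a sketch on simulated data, not the athletes data), the formula above agrees with R's built-in cor():

```r
# Check the correlation-coefficient formula against R's cor()
set.seed(1)
x = rnorm(50)              # simulated variable, for illustration only
y = 0.5 * x + rnorm(50)    # a second, correlated variable
rho_hat = sum((x - mean(x)) * (y - mean(y))) /
  (sqrt(sum((x - mean(x))^2)) * sqrt(sum((y - mean(y))^2)))
all.equal(rho_hat, cor(x, y))   # TRUE
```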
## length width height
## length 1.000 0.978 0.965
## width 0.978 1.000 0.961
## height 0.965 0.961 1.000
Pairs plot for turtles data
Pairs plot for the athletes data
Heatmap for the athletes data
We usually center the cloud of points around the origin; the most common way of doing this is to make new variables whose means are all zero.
More robust centering, for instance using the median, can also be done.
Different variables are measured in different units, and at different scales, and so would be hard to compare in their original form.
## length width height
## 20.48 12.68 8.39
## length width height
## 124.7 95.4 46.3
Transform the data: the usual transformation is standardizing the data to a common standard deviation, as in the correlation coefficient.
This rescaling is done using the scale function which makes every column have a variance of 1.
turtleMatScale=scale(turtles[,-1])
scaledturtles=data.frame(turtleMatScale,sex=turtles[,1])
apply(scaledturtles[,-4],2,mean)
## length width height
## -1.43e-18 1.94e-17 -2.87e-16
## length width height
## 1 1 1
Invented in 1901 by Karl Pearson as a way to reduce a two variable scatterplot to a single coordinate.
Used by statisticians in the 1930s to summarize a battery of psychological tests run on the same subjects (Hotelling, 1933), extracting overall scores that could summarize many variables at once.
It is called Principal Component Analysis (abbreviated PCA).
Principal, not principled.
PCA is an ‘unsupervised learning technique’ because it treats all variables as having the same status.
PCA is a visualization technique that produces maps of both variables and observations.
We are going to give you a flavor of what are called multivariate analyses. As a useful first approximation, we formulate many of the methods through manipulations from linear algebra.
The raison d’être for multivariate analyses is connections or associations between the different variables.
If the columns of the matrix are unrelated, we should just study each column separately and do standard univariate statistics on them one by one.
Use geometry:
Here we show one way of projecting two dimensional data onto a line.
The olympic data come from the ade4 package; they record the performances of decathlon athletes in an Olympic competition.
Scatterplot of two variables showing projection on the x coordinate in red.
In general, we lose information about the points when we project down from two dimensions (a plane) to one (a line).
If we do it just by using the original coordinates, for instance the x coordinate as we did above, we lose all the information about the second one.
There are actually many ways of projecting the point cloud onto a line. One is to use what are known as regression lines. Let's look at these lines and how they are constructed in R:
attach(athletes)
require(ggplot2)
reg1 <- lm(disc~weight,data=athletes)
#abline(reg1, col='red')
a <- reg1$coefficients[1] # Intercept
b <- reg1$coefficients[2] # slope
pline=p+geom_abline(intercept=a,slope=b, col="blue")
proj=pline+geom_segment(aes(x=weight, xend=weight, y=disc,
yend=reg1$fitted),linetype=1,colour="red",
arrow = arrow(length = unit(0.15,"cm")))
print(proj)
The blue line minimizes the sum of squares of the vertical residuals (shown in red).
What is the variance of the points along the blue line?
## [1] 1.65
Variance of points
## [1] 1.65
The orange line minimizes the horizontal residuals, those for the weight variable.
xy=cbind(athletes$disc,athletes$weight)
svda=svd(xy)
pc = xy %*% svda$v[,1] %*% t(svda$v[,1])
bp = svda$v[2,1] /svda$v[1,1]
ap = mean(pc[,2])-bp*mean(pc[,1])
p+geom_segment(xend=pc[,1],yend=pc[,2])+
geom_abline(intercept=ap,slope=bp, col="purple",lwd=1.5)
The purple line minimizes both residuals and thus (through Pythagoras) it minimizes the sum of squared distances from the points to the line.
Minimizing the distance to the line in both directions, the purple line is the principal component line; the green and blue lines are the regression lines.
The lines created here are sensitive to the choice of units; because we have made the standard deviations equal to one for both variables, the PCA line is the diagonal that cuts exactly in the middle of both regression lines.
The data were centered by subtracting their means thus ensuring that the line passes through the origin \((0,0)\).
Compute the variance of the points on the purple line.
We computed the coordinates of the projected points when we made the plot; they are stored in pc:
## [1] 0.903 0.903
## [1] 1.81
ppdf = data.frame(PC1n=-svda$u[,1]*svda$d[1],PC2n=svda$u[,2]*svda$d[2])
ggplot(ppdf,aes(x=PC1n,y=PC2n))+geom_point()+ ylab("PC2 ")+
geom_hline(yintercept=0,color="purple",lwd=1.5,alpha=0.5) +
geom_point(aes(x=PC1n,y=0),color="red")+ xlab("PC1 ")+
xlim(-3.5, 2.7)+ylim(-2,2)+coord_fixed() +
geom_segment(aes(xend=PC1n,yend=0), color="red")
The line created here is sensitive to the choice of units, and to the center of the cloud.
Note that Pythagoras' theorem tells us something interesting here: by minimizing in both the horizontal and vertical directions, we are in fact minimizing the orthogonal (diagonal) projections onto the line from each point.
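The Pythagorean decomposition can be verified numerically (a sketch on simulated, centered data rather than the athletes data): the total sum of squares splits into the sums of squares along the orthogonal singular directions.

```r
# Sketch: for centered two-dimensional data, the total sum of squares equals
# the sum of squares along PC1 plus the sum of squares along PC2 (Pythagoras)
set.seed(2)
xy = scale(matrix(rnorm(200), ncol = 2), center = TRUE, scale = FALSE)
sv = svd(xy)
total_ss = sum(xy^2)        # total sum of squares about the centroid
along_ss = sum(sv$d^2)      # d[1]^2 along PC1 plus d[2]^2 along PC2
all.equal(total_ss, along_ss)   # TRUE
```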
To understand what a linear combination really is, we can use an analogy: when making a healthy juice mix, you can follow a recipe.
\[V=2\times \mbox{ Beets }+ 1\times \mbox{Carrots } +\frac{1}{2} \mbox{ Gala}+ \frac{1}{2} \mbox{ GrannySmith} +0.02\times \mbox{ Ginger} +0.25 \mbox{ Lemon }\] This recipe is a linear combination of individual juice types, in our analogy these are replaced by the original variables. The result is a new variable, the coefficients \((2,1,\frac{1}{2},\frac{1}{2},0.02,0.25)\) are called the loadings.
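The recipe can be written directly as a matrix product. Here is a sketch: the loadings are the recipe's coefficients, while the juice matrix itself is invented purely for illustration.

```r
# A linear combination as a matrix product: one new variable from six old ones
loadings = c(Beets = 2, Carrots = 1, Gala = 0.5,
             GrannySmith = 0.5, Ginger = 0.02, Lemon = 0.25)
set.seed(3)
X = matrix(rnorm(5 * 6, mean = 10), nrow = 5,
           dimnames = list(NULL, names(loadings)))  # 5 invented "batches"
V = X %*% loadings   # each entry is 2*Beets + 1*Carrots + ... + 0.25*Lemon
```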
A linear combination of variables defines a line in our space, in the same way as we saw lines in the scatterplot plane for two dimensions. As in that case, there are many ways to choose lines onto which we project the data; there is, however, a 'best' line for our purposes.
Total variance can be decomposed: the total sum of squares of the distances between the points and any line can be decomposed into the squared distances to the line and the variance along the line.
We saw that the principal component minimizes the distance to the line, and it also maximizes the variance of the projections along the line.
What is this?
MysteryImage
Which projection do you think is better?
It's the projection that maximizes the area of the shadow. An equivalent measure is the sum of squares of the distances between points in the projection: we want to see as much of the variation as possible, and that is what PCA does.
Many choices have to be made during PCA processing.
PCA is based on the principle of finding the largest axis of inertia/variability and then iterating to find the next best axis which is orthogonal to the previous one and so on.
The eigenvalues of \(X'X\), or equivalently the singular values of \(X\), tell us the rank.
What does rank mean?
| X | 2 | 4 | 8 |
|---|---|---|---|
| 1 |   |   |   |
| 2 |   |   |   |
| 3 |   |   |   |
| 4 |   |   |   |

| X | 2 | 4 | 8 |
|---|---|---|---|
| 1 | 2 |   |   |
| 2 | 4 |   |   |
| 3 | 6 |   |   |
| 4 | 8 |   |   |

| X | 2 | 4 | 8 |
|---|---|---|---|
| 1 | 2 | 4 | 8 |
| 2 | 4 | 8 | 16 |
| 3 | 6 | 12 | 24 |
| 4 | 8 | 16 | 32 |
We say that the matrix \[\begin{pmatrix} 2&4&8\\ 4&8&16\\ 6 &12&24\\ 8&16&32\\ \end{pmatrix} \] is of rank one.
\[\begin{pmatrix} 2&4&8\\ 4&8&16\\ 6 &12&24\\ 8&16&32\\ \end{pmatrix}= u\,v', \qquad u =\left(\begin{smallmatrix} 1 \\2\\ 3\\ 4 \end{smallmatrix}\right) \mbox{ and } v'=(2\; 4\; 8), \] which in R notation is u %*% t(v).
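This construction can be reproduced directly in R; the rank is checked here with qr():

```r
# Build the rank-one matrix as an outer product and confirm its rank
u = c(1, 2, 3, 4)
v = c(2, 4, 8)
X = u %*% t(v)   # same as outer(u, v): every entry is u_i * v_j
qr(X)$rank       # 1
```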
## [,1] [,2] [,3] [,4]
## [1,] 780 936 1300 728
## [2,] 75 90 125 70
## [3,] 540 648 900 504
## [1] 1.01
## [1] 1.06
## [,1] [,2] [,3] [,4]
## [1,] 751.4 939 1315 751.4
## [2,] 93.9 117 164 93.9
## [3,] 563.6 704 986 563.6
## [,1] [,2] [,3] [,4]
## [1,] 28.6 -3.28 -15.0 -23.4
## [2,] -18.9 -27.41 -39.4 -23.9
## [3,] -23.6 -56.46 -86.2 -59.6
Mosaic-plot figures (not shown here): the matrix \(X\) we would like to decompose; areas are proportional to the entries; looking at different possible margins; forcing the margins to have norm \(1\).
u1=c(0.8196, 0.0788, 0.5674)
v1=c(0.4053, 0.4863, 0.6754, 0.3782)
s1=2348.2
s1*u1 %*%t(v1)
## [,1] [,2] [,3] [,4]
## [1,] 780 936 1300 728
## [2,] 75 90 125 70
## [3,] 540 648 900 504
Xsub=matrix(c(12.5 , 35.0 , 25.0 , 25,9,14,26,18,16,21,49,
32,18,28,52,36,18,10.5,64.5,36),ncol=4,byrow=T)
Xsub
## [,1] [,2] [,3] [,4]
## [1,] 12.5 35.0 25.0 25
## [2,] 9.0 14.0 26.0 18
## [3,] 16.0 21.0 49.0 32
## [4,] 18.0 28.0 52.0 36
## [5,] 18.0 10.5 64.5 36
USV=svd(Xsub)
USV
## $d
## [1] 1.35e+02 2.81e+01 3.10e-15 1.85e-15
##
## $u
## [,1] [,2] [,3] [,4]
## [1,] -0.344 0.7717 0.5193 -0.114
## [2,] -0.264 0.0713 -0.3086 -0.504
## [3,] -0.475 -0.0415 -0.0386 0.803
## [4,] -0.528 0.1426 -0.6423 -0.103
## [5,] -0.554 -0.6143 0.4702 -0.280
##
## $v
## [,1] [,2] [,3] [,4]
## [1,] -0.250 0.0404 -0.967 0.0244
## [2,] -0.343 0.8798 0.133 0.3010
## [3,] -0.755 -0.4668 0.186 0.4214
## [4,] -0.500 0.0808 0.111 -0.8551
Xsub-(135*USV$u[,1]%*%t(USV$v[,1]))
## [,1] [,2] [,3] [,4]
## [1,] 0.8802 19.05 -10.088 1.7604
## [2,] 0.0849 1.76 -0.921 0.1698
## [3,] -0.0396 -1.01 0.565 -0.0792
## [4,] 0.1698 3.53 -1.842 0.3397
## [5,] -0.6877 -15.15 8.069 -1.3754
## [,1] [,2] [,3] [,4]
## [1,] 0.00387 -0.02528 0.0335 0.00774
## [2,] 0.00398 0.00264 0.0140 0.00796
## [3,] 0.00749 0.01192 0.0214 0.01498
## [4,] 0.00796 0.00527 0.0281 0.01592
## [5,] 0.00983 0.03784 0.0123 0.01965
## [,1] [,2] [,3] [,4]
## [1,] 7.22e-15 -1.07e-14 8.88e-15 4.88e-15
## [2,] 2.04e-15 -6.00e-15 1.05e-14 3.22e-15
## [3,] 2.87e-15 -9.55e-15 1.55e-15 6.23e-15
## [4,] 4.39e-15 -5.77e-15 1.78e-14 7.05e-15
## [5,] 5.11e-15 -1.78e-15 1.78e-14 1.78e-14
## Eye
## Hair Brown Blue Hazel Green
## Black 36 9 5 2
## Brown 66 34 29 14
## Red 16 7 7 7
## Blond 4 64 5 8
## Warning in chisq.test(HairColor): Chi-squared approximation may be incorrect
##
## Pearson's Chi-squared test
##
## data: HairColor
## X-squared = 100, df = 9, p-value <2e-16
prows=sweep(HairColor,1,apply(HairColor,1,sum),"/")
pcols=sweep(HairColor,2,apply(HairColor,2,sum),"/")
Indep=313*as.matrix(prows)%*%t(as.matrix(pcols))
round(Indep)
## Hair
## Hair Black Brown Red Blond
## Black 72 158 39 44
## Brown 57 154 40 61
## Red 55 155 44 59
## Blond 28 108 27 149
## [1] 799
diabetes.svd=svd(scale(diabetes[,-5]))
names(diabetes.svd)
## [1] "d" "u" "v"
## [1] 20.09 13.38 9.89 5.63 1.70
## [1] 11.75 1.42 1.00
\[{\Large X = u_{\bullet 1}\, s_1\, v_{\bullet 1}' \;+\; u_{\bullet 2}\, s_2\, v_{\bullet 2}' \;+\; u_{\bullet 3}\, s_3\, v_{\bullet 3}' \;+\; \cdots}\]
We write our horizontal/vertical decomposition of the matrix \(X\) in shorthand as: \[{\Large X = USV', \quad V'V=I, \quad U'U=I, \quad S} \mbox{ diagonal matrix of singular values, given by the {\tt d} component in the R function.}\] The crossproduct of \(X\) with itself verifies \[{\Large X'X=VSU'USV'=VS^2V'=V\Lambda V'}\] where \(V\) is called the eigenvector matrix of the symmetric matrix \(X'X\) and \(\Lambda\) is the diagonal matrix of eigenvalues of \(X'X\).
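The identity \(X'X = V\Lambda V'\) with \(\Lambda = S^2\) can be checked numerically; this is a sketch on simulated data, not on any of the data sets above:

```r
# Sketch: the eigendecomposition of X'X matches the svd of X (simulated data)
set.seed(4)
X  = scale(matrix(rnorm(60), ncol = 3))
sv = svd(X)
ev = eigen(crossprod(X))       # crossprod(X) computes X'X
all.equal(ev$values, sv$d^2)   # Lambda = S^2, TRUE
```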
The singular vectors from the singular value decomposition (the svd function above) tell us the coefficients to put in front of the old variables to make our new ones with better properties. We write this as: \[PC_1=c_1 X_{\bullet 1} +c_2 X_{\bullet 2}+ c_3 X_{\bullet 3}+\cdots + c_p X_{\bullet p}\]
Replace \(X_{\bullet 1},X_{\bullet 2}, \ldots X_{\bullet p}\) by \[PC_1, PC_2, \ldots PC_k\]
Suppose we have 5 samples with 23,000 genes measured on them: what is the dimensionality of these data?
The number of principal components is less than or equal to the number of original variables: \[K\leq \min(n,p).\]
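A sketch of the wide-matrix case above (with a simulated expression matrix, and 1000 rather than 23,000 "genes" to keep it small): svd returns only \(\min(n,p)\) singular values, so at most \(n=5\) components carry information.

```r
# With 5 samples and many genes, svd returns at most min(n, p) = 5 components;
# the expression matrix is simulated for illustration
set.seed(5)
genes = matrix(rnorm(5 * 1000), nrow = 5)   # 5 samples, 1000 "genes"
length(svd(genes)$d)   # 5
```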
The geometry of data: a good trick is to look at the sizes of the vectors involved.
The principal component transformation is defined in such a way that the first principal component has the largest possible variance (that is, it accounts for as much of the variability in the data as possible), and each successive component in turn has the highest variance possible under the constraint that it be orthogonal to the preceding components: \[\max_{a} \mbox{var}(\mbox{Proj}_{a} (X))\]
Suppose the matrix of data \(X\) has been made to have column means 0 and standard deviations 1.
We call the principal components the columns of the matrix, \(C=US\).
The columns of \(U\) (the matrix given as USV$u in the output of the svd function above) are rescaled by the singular values, so that each principal component \(c_i = s_i u_i\) has squared norm \(s_i^2\), the variance it is responsible for.
If the matrix \(X\) comes from the study of \(n\) different samples or specimens, then the principal components provide new coordinates for these \(n\) points; these are sometimes also called the scores in some of the (many) PCA functions available in R (princomp, prcomp, dudi.pca in ade4).
If we only want the first one then it is just \(c_1=s_1 u_1\).
Variance explained by first principal component: \(s_1^2\):
Notice that \(\|c_1\|^2 = c_1'c_1 = s_1 u_1' u_1 s_1 = s_1^2 = \lambda_1\)
\[X'C=VSU'US=VS^2\]
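The identity \(X'C = VS^2\) with \(C = US\) can also be verified numerically; again a sketch on simulated, centered and scaled data:

```r
# Sketch verifying X'C = V S^2, where C = U S are the principal components
set.seed(6)
X  = scale(matrix(rnorm(80), ncol = 4))
sv = svd(X)
C  = sv$u %*% diag(sv$d)   # the principal components (scores)
all.equal(t(X) %*% C, sv$v %*% diag(sv$d^2),
          check.attributes = FALSE)   # TRUE
```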
We start with the turtles data that has 3 continuous variables and a gender variable that we leave out for the original PCA analysis.
When computing the variance-covariance matrix, many programs use \(1/(n-1)\) as the denominator; here \(n=48\), so the sum of the variances is off by a small fudge factor of \(48/47\).
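The fudge factor can be seen directly in a sketch on simulated data (var() divides by \(n-1\), while princomp() divides by \(n\)):

```r
# var() divides by n-1 while princomp() divides by n, hence the n/(n-1) factor
set.seed(7)
n = 48
X = matrix(rnorm(n * 3), ncol = 3)
ratio = sum(apply(X, 2, var)) / sum(princomp(X)$sdev^2)
all.equal(ratio, n / (n - 1))   # TRUE: the 48/47 fudge factor
```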
## length width height
## 124.7 95.4 46.3
## Call:
## princomp(x = turtles3var)
##
## Standard deviations:
## Comp.1 Comp.2 Comp.3
## 25.06 2.26 1.94
##
## 3 variables and 48 observations.
## [1] 650
## length width height
## 419.5 160.7 70.4
## length width height
## 20.48 12.68 8.39
## length width height
## length 1.000 0.978 0.965
## width 0.978 1.000 0.961
## height 0.965 0.961 1.000
## Call:
## princomp(x = turtlesc)
##
## Standard deviations:
## Comp.1 Comp.2 Comp.3
## 1.695 0.205 0.145
##
## 3 variables and 48 observations.
The screeplot showing the eigenvalues for the standardized data: one very large component in this case and two very small ones, the data are (almost) one dimensional.
Choose k carefully:
Exercise: How are the following numbers related?
## [1] 6.86 6.86 6.86
## [1] 6.86
## [1] 48
This data set describes 18 lizard species as reported by Bauwens and Díaz-Uriarte (1997). It also gives life-history traits corresponding to these 18 species.
mean.L (mean length, mm), matur.L (length at maturity, mm), max.L (maximum length, mm), hatch.L (hatchling length, mm), hatch.m (hatchling mass, g), clutch.S (clutch size), age.mat (age at maturity, in months of activity), clutch.F (clutch frequency).
## [1] "traits" "hprA" "hprB"
## mean.L matur.L max.L hatch.L hatch.m clutch.S age.mat clutch.F
## Sa 69.2 58 82 27.8 0.572 6.0 13 1.5
## Sh 48.4 42 56 22.9 0.310 3.2 5 2.0
## Tl 168.4 132 190 42.8 2.235 16.9 19 1.0
## Mc 66.1 56 72 25.0 0.441 7.2 11 1.5
It is always a good idea to check the variables one at a time and two at a time to see what the basic statistics of the data are.
## mean.L matur.L max.L hatch.L hatch.m clutch.S age.mat clutch.F
## 71.34 59.39 82.83 26.88 0.56 5.87 10.89 1.56
## mean.L matur.L max.L hatch.L hatch.m clutch.S age.mat clutch.F
## mean.L 1.00 0.99 0.99 0.89 0.94 0.92 0.77 -0.48
## matur.L 0.99 1.00 0.99 0.90 0.92 0.92 0.79 -0.49
## max.L 0.99 0.99 1.00 0.88 0.92 0.91 0.78 -0.51
## hatch.L 0.89 0.90 0.88 1.00 0.96 0.72 0.58 -0.42
## hatch.m 0.94 0.92 0.92 0.96 1.00 0.78 0.64 -0.45
## clutch.S 0.92 0.92 0.91 0.72 0.78 1.00 0.81 -0.55
## age.mat 0.77 0.79 0.78 0.58 0.64 0.81 1.00 -0.62
## clutch.F -0.48 -0.49 -0.51 -0.42 -0.45 -0.55 -0.62 1.00
## Duality diagramm
## class: pca dudi
## $call: dudi.pca(df = tabtraits, scannf = F, nf = 2)
##
## $nf: 2 axis-components saved
## $rank: 8
## eigen values: 6.5 0.83 0.42 0.17 0.045 ...
## vector length mode content
## 1 $cw 8 numeric column weights
## 2 $lw 18 numeric row weights
## 3 $eig 8 numeric eigen values
##
## data.frame nrow ncol content
## 1 $tab 18 8 modified array
## 2 $li 18 2 row coordinates
## 3 $l1 18 2 row normed scores
## 4 $co 8 2 column coordinates
## 5 $c1 8 2 column normed scores
## other elements: cent norm
## [1] 0.81118 0.10387 0.05219 0.02133 0.00563 0.00488 0.00061 0.00031
## m100 long weight highj m400 m110 disc pole javel m1500
## m100 1.0 -0.5 -0.2 -0.1 0.6 0.6 0.0 -0.4 -0.1 0.3
## long -0.5 1.0 0.1 0.3 -0.5 -0.5 0.0 0.3 0.2 -0.4
## weight -0.2 0.1 1.0 0.1 0.1 -0.3 0.8 0.5 0.6 0.3
## highj -0.1 0.3 0.1 1.0 -0.1 -0.3 0.1 0.2 0.1 -0.1
## m400 0.6 -0.5 0.1 -0.1 1.0 0.5 0.1 -0.3 0.1 0.6
## m110 0.6 -0.5 -0.3 -0.3 0.5 1.0 -0.1 -0.5 -0.1 0.1
## disc 0.0 0.0 0.8 0.1 0.1 -0.1 1.0 0.3 0.4 0.4
## pole -0.4 0.3 0.5 0.2 -0.3 -0.5 0.3 1.0 0.3 0.0
## javel -0.1 0.2 0.6 0.1 0.1 -0.1 0.4 0.3 1.0 0.1
## m1500 0.3 -0.4 0.3 -0.1 0.6 0.1 0.4 0.0 0.1 1.0
## [1] 3.42 2.61 0.94 0.88 0.56 0.49 0.43 0.31 0.27 0.10
The screeplot is the first thing to look at: it tells us that a two dimensional representation is satisfactory.
The correlation circle shows the projection of the old variables onto the first two new principal axes.
## m100 long weight highj m400 m110 disc pole javel m1500
## m100 1.0 0.5 0.2 0.1 0.6 0.6 0.0 0.4 0.1 0.3
## long 0.5 1.0 0.1 0.3 0.5 0.5 0.0 0.3 0.2 0.4
## weight 0.2 0.1 1.0 0.1 -0.1 0.3 0.8 0.5 0.6 -0.3
## highj 0.1 0.3 0.1 1.0 0.1 0.3 0.1 0.2 0.1 0.1
## m400 0.6 0.5 -0.1 0.1 1.0 0.5 -0.1 0.3 -0.1 0.6
## m110 0.6 0.5 0.3 0.3 0.5 1.0 0.1 0.5 0.1 0.1
## disc 0.0 0.0 0.8 0.1 -0.1 0.1 1.0 0.3 0.4 -0.4
## pole 0.4 0.3 0.5 0.2 0.3 0.5 0.3 1.0 0.3 0.0
## javel 0.1 0.2 0.6 0.1 -0.1 0.1 0.4 0.3 1.0 -0.1
## m1500 0.3 0.4 -0.3 0.1 0.6 0.1 -0.4 0.0 -0.1 1.0
## [1] 3.42 2.61 0.94 0.88 0.56 0.49 0.43 0.31 0.27 0.10
Now all the negative correlations are quite small. Doing the screeplot over again shows no change in the eigenvalues; the only thing that changes is the sign of the loadings for the running-event variables.
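The invariance of the eigenvalues under sign flips can be checked on simulated data (a sketch, not the athletes data themselves):

```r
# Sketch: flipping the sign of a variable leaves the eigenvalues of the
# correlation matrix unchanged; only the corresponding loadings flip sign
set.seed(8)
X = matrix(rnorm(100 * 4), ncol = 4)
Xflip = X
Xflip[, 1] = -Xflip[, 1]   # e.g. negate a running time so "larger is better"
all.equal(eigen(cor(X))$values, eigen(cor(Xflip))$values)   # TRUE
```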
Correlation circle after changing the signs of the running variables.
## [1] 8488 8399 8328 8306 8286 8272 8216 8189 8180 8167 8143 8114 8093 8083 8036
## [16] 8021 7869 7860 7859 7781 7753 7745 7743 7623 7579 7517 7505 7422 7310 7237
## [31] 7231 7016 6907
Scatterplot of the scores given as a supplementary variable and the first principal component; the points are labeled by their order in the data set.
## Center and scale the data
## (they have already had variance normalization applied to them)
res.Msig3=dudi.pca(Msig3transp,center=TRUE,scale=TRUE,scannf=F,nf=4)
screeplot(res.Msig3,main="")
## celltypes
## EFF MEM NAI
## 10 9 11
status=factor(substr(rownames(Msig3transp),1,3))
require(ggplot2)
gg <- cbind(res.Msig3$li,Cluster=celltypes)
gg <- cbind(sample=rownames(gg),gg)
ggplot(gg, aes(x=Axis1, y=Axis2)) +
geom_point(aes(color=factor(Cluster)),size=5) +
geom_hline(yintercept=0,linetype=2) +
geom_vline(xintercept=0,linetype=2) +
scale_color_discrete(name="Cluster") +
coord_fixed()+ ylim(-8,+8)
PCA of gene expression for a subset of 156 genes involved in the specificities of each of the three separate T cell types: effector, naive and memory.
Example from the paper: Kashyap et al., PNAS, 2013.
## PCA Example
require(ade4)
require(ggplot2)
load(url("http://bios221.stanford.edu/data/logtmat.RData"))
pca.result=dudi.pca(logtmat, scannf=F,nf=3)
labs=rownames(pca.result$li)
nos=substr(labs,3,4)
type=as.factor(substr(labs,1,2))
kos=which(type=="ko")
wts=which(type=="wt")
pcs=data.frame(Axis1=pca.result$li[,1],Axis2=pca.result$li[,2],labs,type)
pcsplot=ggplot(pcs,aes(x=Axis1,y=Axis2,label=labs,group=nos,colour=type)) +
geom_text(size=4,vjust=-0.5) + geom_point()
pcsplot + geom_hline(yintercept=0,linetype=2) +coord_fixed() + ylim(-12,18) +
geom_vline(xintercept=0,linetype=2)
The phylochip data allowed us to discover a batch effect.
Phylochip data for three different batches and two different arrays, first principal plane explains 66% of the total variation.
Sometimes we want to see the variability between different groups of observations but need to weight them. This can happen when we want to summarize data for heterogeneous groups which do not have equal sizes. Let's do this for the specific example of the Hiiragi (Ohnishi et al., 2014) data we saw in Lectures 3 and 5 and show how reweighting is relevant here.
library("Hiiragi2013")
set.seed(2013)
data("x")
FGF4probes = (fData(x)$symbol == "Fgf4")
groups = split(seq_len(ncol(x)), pData(x)$sampleGroup)
safeSelect = function(grpnames){
stopifnot(all(grpnames %in% names(groups)))
unlist(groups[grpnames])
}
g = safeSelect(c("E3.25",
"E3.5 (EPI)", "E3.5 (PE)",
"E4.5 (EPI)", "E4.5 (PE)"))
nfeatures = 100
varianceOrder = order(rowVars(exprs(x[, g])), decreasing = TRUE)
varianceOrder = setdiff(varianceOrder, which(FGF4probes))
selectedFeatures = varianceOrder[seq_len(nfeatures)]
sampleColourMap = setNames(unique(pData(x)$sampleColour), unique(pData(x)$sampleGroup))
xwt = x[selectedFeatures, g]
tab = table(xwt$sampleGroup)
tab
##
## E3.25 E3.5 (EPI) E3.5 (PE) E4.5 (EPI) E4.5 (PE)
## 36 11 11 4 4
We want to do a PCA on 66 points from the wild type genotype data, but the groups are not equally represented, so we will reweight them to even out the representations.
selectedSamples = with(pData(x), genotype=="WT")
xe = x[, selectedSamples]
##To account for the different numbers in the groups, we reweight the samples
wt=c(rep(1,36), rep(36/11,11),rep(36/11,11),rep(36/4,4),rep(36/4,4))
length(wt)
## [1] 66
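Rather than hard-coding the rep() calls, the weights can be derived from the group sizes. This is a sketch using a counts vector that mirrors the table above (in the actual analysis the counts would come from tab):

```r
# Weights inversely proportional to group size, scaled so the largest
# group gets weight 1 (reproduces the hard-coded wt vector above)
counts = c("E3.25" = 36, "E3.5 (EPI)" = 11, "E3.5 (PE)" = 11,
           "E4.5 (EPI)" = 4, "E4.5 (PE)" = 4)
wt2 = rep(max(counts) / counts, times = counts)
length(wt2)   # 66
```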
library("factoextra")
## Reweighting of groups using dudi.pca
dfx = data.frame(t(Biobase::exprs(xwt)))  # reconstructed step, not in the original chunk: samples as rows
resPCAD=dudi.pca(dfx,row.w=wt,center=TRUE,scale=TRUE,nf=2,scannf=F)
fviz_eig(resPCAD)
# fviz_pca_ind(resPCAD, geom = c("point"), habillage = xwt$sampleGroup)
# To use the special in-house colours
PCA requires conscious preprocessing to make the variables' variances comparable and their centers at the origin; it then creates more informative variables which are linear combinations of the old ones.
We’ll see later when we look at Microbiome data that sometimes, this projection can be problematic.
## Inertia information:
## Call: inertia.dudi(x = res.ath, col.inertia = TRUE)
##
## Decomposition of total inertia:
## inertia cum cum(%)
## Ax1 3.4182 3.418 34.18
## Ax2 2.6064 6.025 60.25
## Ax3 0.9433 6.968 69.68
## Ax4 0.8780 7.846 78.46
## Ax5 0.5566 8.403 84.03
## Ax6 0.4912 8.894 88.94
## Ax7 0.4306 9.324 93.24
## Ax8 0.3068 9.631 96.31
## Ax9 0.2669 9.898 98.98
## Ax10 0.1019 10.000 100.00
##
## Column contributions (%):
## m100 long weight highj m400 m110 disc pole javel m1500
## 10 10 10 10 10 10 10 10 10 10
##
## Column absolute contributions (%):
## Axis1(%) Axis2(%)
## m100 17.296 2.21439
## long 15.528 2.31288
## weight 7.242 23.38084
## highj 4.506 0.07783
## m400 12.663 12.40165
## m110 18.791 0.48397
## disc 3.090 25.33458
## pole 14.752 2.23748
## javel 3.238 13.83520
## m1500 2.895 17.72118
##
## Signed column relative contributions:
## Axis1 Axis2
## m100 -59.121 -5.7716
## long -53.077 -6.0283
## weight -24.754 60.9397
## highj -15.404 0.2029
## m400 -43.284 -32.3236
## m110 -64.231 -1.2614
## disc -10.563 66.0319
## pole -50.426 5.8317
## javel -11.068 36.0600
## m1500 -9.895 -46.1884
##
## Cumulative sum of column relative contributions (%):
## Axis1 Axis1:2 Axis3:10
## m100 59.121 64.89 35.11
## long 53.077 59.11 40.89
## weight 24.754 85.69 14.31
## highj 15.404 15.61 84.39
## m400 43.284 75.61 24.39
## m110 64.231 65.49 34.51
## disc 10.563 76.60 23.40
## pole 50.426 56.26 43.74
## javel 11.068 47.13 52.87
## m1500 9.895 56.08 43.92
Contributions are printed in units of 1/10000, and the sign is the sign of the coordinate.